Results 1 - 20 of 1,127
1.
World J Surg ; 2024 May 10.
Article in English | MEDLINE | ID: mdl-38730536

ABSTRACT

BACKGROUND: The burden of musculoskeletal conditions continues to grow in low- and middle-income countries. Among the thousands of surgical outreach trips conducted each year, few organizations electronically track patient data to inform real-time care decisions and assess trip impact. We report the implementation of an electronic health record (EHR) system used at the point of care during an orthopedic surgical outreach trip. METHODS: In March 2023, we implemented an EHR on an orthopedic outreach trip to guide real-time care decisions. We used an effectiveness-implementation hybrid type 3 design to evaluate implementation success. Success was measured using outcomes adopted by the World Health Organization (WHO), including acceptability, appropriateness, feasibility, adoption, fidelity, and sustainability. Clinical outcome measures included adherence to essential quality measures and follow-up numerical rating scale (NRS) pain scores. RESULTS: During the 5-day outreach trip, 76 patients were evaluated, 25 of whom underwent surgery. The EHR implementation met the predefined success criteria: mean questionnaire ratings of acceptability (4.26), appropriateness (4.12), feasibility (4.19), and adoption (4.33) were each at least 4.00; the WHO behaviorally anchored rating scale rating of fidelity (6.8) was at least 5.00; and sustainability (80% follow-up at 6 months) exceeded the 60% threshold. Each clinical quality measure was reported in more than 80% of cases, and all measures were reported in 92% of cases. NRS pain scores improved by an average of 2.4 points. CONCLUSIONS: We demonstrate successful implementation of an EHR for real-time clinical use on a surgical outreach trip. Benefits of EHR use on surgical outreach trips may include improved documentation, fewer medical errors, and ultimately improved quality of care.

2.
JMIR Med Inform ; 12: e50164, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38717378

ABSTRACT

Background: Tolvaptan is the only US Food and Drug Administration-approved drug to slow the progression of autosomal dominant polycystic kidney disease (ADPKD), but it requires strict clinical monitoring due to potential serious adverse events. Objective: We aimed to share our experience in developing and implementing an electronic health record (EHR)-based application to monitor patients with ADPKD who were initiated on tolvaptan. Methods: The application was developed in collaboration with clinical informatics professionals based on our clinical protocol with frequent laboratory test monitoring to detect early drug-related toxicity. The application streamlined the clinical workflow and enabled our nursing team to take appropriate actions in real time to prevent drug-related serious adverse events. We retrospectively analyzed the characteristics of the enrolled patients. Results: As of September 2022, a total of 214 patients were enrolled in the tolvaptan program across all Mayo Clinic sites. Of these, 126 were enrolled in the Tolvaptan Monitoring Registry application and 88 in the Past Tolvaptan Patients application. The mean age at enrollment was 43.1 (SD 9.9) years. A total of 20 (9.3%) patients developed liver toxicity, but only 5 (2.3%) had to discontinue the drug. The 2 EHR-based applications allowed consolidation of all necessary patient information and real-time data management at the individual or population level. This approach facilitated efficient staff workflow, monitoring of drug-related adverse events, and timely prescription renewal. Conclusions: Our study highlights the feasibility of integrating digital applications into the EHR workflow to facilitate efficient and safe care delivery for patients enrolled in a tolvaptan program. This workflow needs further validation but could be extended to other health care systems managing chronic diseases requiring drug monitoring.

3.
Cureus ; 16(4): e57672, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38707055

ABSTRACT

Background and aim: In 2005, the Moroccan Ministry of Health established Magredial, a registry to track and monitor patients with end-stage renal disease (ESRD), with the aim of improving healthcare outcomes. After initial success, Magredial's activity decreased, and it became inactive by 2015. Efforts are currently underway to revive its use. The main goal of this study is to investigate the feasibility of data transfer between the electronic medical records (EMRs) of Hassan II Hospital of Fes, Morocco, and the registry by achieving semantic interoperability between the two systems. Materials and methods: The initial phase of this study involved a detailed review of the existing literature, highlighting the importance of registries, especially in nephrology, and emphasizing the role of semantic interoperability in facilitating data sharing between EMRs and registries. The second phase, centered on the case study, conducted a detailed analysis of the data architectures of both Magredial and the EMR of the nephrology department to pinpoint areas of alignment and discrepancy. This step required cooperation between the nephrology and IT departments of Hassan II Hospital. Results: Our findings indicate a significant interoperability gap between the two systems, stemming from differences in their data architectures and semantic frameworks. These discrepancies severely impede the effective exchange of information between the systems. To address this challenge, a comprehensive restructuring of the EMR is proposed, designed to align the disparate systems and ensure compliance with the Health Level 7 Clinical Document Architecture (HL7-CDA) interoperability standard. Implementing the proposed medical record approach is complex and time-consuming, and it requires the commitment of healthcare professionals and adherence to ethical standards for patient consent and data privacy. Conclusions: Implementing this strategy is expected to enable seamless, automated data transfer between the EMR and Magredial. It introduces a framework that could serve as a foundational model for a robust interoperability architecture within nephrology information systems, in line with international standards. Ultimately, this initiative could lead to a nephrologist-shared health record across the country, enhancing patient care and data management within the specialty.

4.
J Med Internet Res ; 26: e45593, 2024 May 14.
Article in English | MEDLINE | ID: mdl-38743464

ABSTRACT

BACKGROUND: The use of triage systems such as the Manchester Triage System (MTS) is a standard procedure to determine the sequence of treatment in emergency departments (EDs). When using the MTS, time targets for treatment are determined and are commonly displayed to ED staff in the ED information system (EDIS). Using measurements as targets has been associated with a decline in meeting those targets. OBJECTIVE: This study investigated the impact of displaying time targets for treatment to physicians on processing times in the ED. METHODS: We analyzed the effects of displaying time targets to ED staff on waiting times in a prospective crossover study during the introduction of a new EDIS in a large regional hospital in Germany. The old information system version used a module that showed the time target determined by the MTS, while the new system version used a priority list instead. The evaluation was based on 35,167 routinely collected electronic health records from the preintervention period and 10,655 records from the postintervention period. Electronic health records were extracted from the EDIS, and data were analyzed using descriptive statistics and generalized additive models. We evaluated the effects of the intervention on waiting times and on the odds of achieving timely treatment according to the time targets set by the MTS. RESULTS: The average ED length of stay and waiting times increased when the EDIS that did not display time targets was used (average time from admission to treatment: preintervention phase=median 15, IQR 6-39 min; postintervention phase=median 11, IQR 5-23 min). However, severe cases with high acuity (as indicated by the triage score) benefited from lower waiting times (0.15 times as high as in the preintervention period for MTS 1 and 0.49 times as high for MTS 2). Furthermore, these patients were less likely to receive delayed treatment, and we observed reduced odds of late treatment when crowding occurred. CONCLUSIONS: Our results suggest that it is beneficial to use a priority list instead of displaying time targets to ED personnel, as time targets may create false incentives. Our work highlights that working better is not the same as working faster.
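Purely as an illustration of the generalized additive modeling step mentioned in the methods (this is not the study's code; the variables, the simulated data, and the use of the pygam library are assumptions), the odds of meeting an MTS time target could be modeled along these lines:

```python
# Illustrative sketch: modeling the odds of timely treatment with a logistic
# generalized additive model (pygam). All variables and data are simulated;
# the study adjusted for additional factors such as crowding.
import numpy as np
from pygam import LogisticGAM, s, f

rng = np.random.default_rng(0)
n = 5000
hour_of_day = rng.integers(0, 24, n)      # smooth term
mts_category = rng.integers(1, 6, n)      # factor term: MTS 1 (most acute) to 5
post_period = rng.integers(0, 2, n)       # 0 = time targets shown, 1 = priority list
X = np.column_stack([hour_of_day, mts_category, post_period])

# Simulated outcome: 1 = treated within the MTS time target
logit = -0.5 + 0.3 * np.sin(hour_of_day / 24 * 2 * np.pi) + 0.2 * post_period
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

gam = LogisticGAM(s(0) + f(1) + f(2)).fit(X, y)
gam.summary()
print(gam.predict_proba(X[:5]))           # estimated probability of timely treatment
```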


Subject(s)
Cross-Over Studies , Emergency Service, Hospital , Triage , Triage/methods , Triage/statistics & numerical data , Humans , Emergency Service, Hospital/statistics & numerical data , Prospective Studies , Female , Male , Time Factors , Germany , Middle Aged , Adult , Aged
5.
Kidney Int Rep ; 9(5): 1244-1253, 2024 May.
Article in English | MEDLINE | ID: mdl-38707795

ABSTRACT

Introduction: Even with effective vaccines, patients with CKD have a higher risk of hospitalization and death after COVID-19 infection than those without CKD. Molnupiravir and nirmatrelvir-ritonavir have been approved for emergency use, but their effectiveness in the CKD population is still unknown. This study was conducted to determine the effectiveness of these drugs in reducing mortality and severe COVID-19 in the CKD population. Methods: This was a target trial emulation study using electronic health databases in Hong Kong. Patients with CKD aged 18 years or older who were hospitalized with COVID-19 were included. Per-protocol average treatment effects among COVID-19 oral antiviral initiators, including all-cause mortality, intensive care unit (ICU) admission, and ventilatory support within 28 days, were compared with those of noninitiators. Results: Antivirals were found to lower the risk of all-cause mortality, with molnupiravir at a hazard ratio (HR) of 0.85 (95% confidence interval [CI] 0.77 to 0.95) and nirmatrelvir-ritonavir at an HR of 0.78 (95% CI 0.60 to 1.00). However, they did not significantly reduce the risk of ICU admission (molnupiravir: HR 0.88, 95% CI 0.59 to 1.30; nirmatrelvir-ritonavir: HR 0.86, 95% CI 0.56 to 1.32) or ventilatory support (molnupiravir: HR 1.00, 95% CI 0.76 to 1.33; nirmatrelvir-ritonavir: HR 1.01, 95% CI 0.74 to 1.37). The risk reduction was greater in males and in those with a higher Charlson Comorbidity Index (CCI). The nirmatrelvir-ritonavir trial emulation also showed reduced risk for those who received antiviral treatment and 3 or more vaccine doses. Conclusion: Both molnupiravir and nirmatrelvir-ritonavir reduced mortality in hospitalized COVID-19 patients with CKD.
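Purely as an illustration of how hazard ratios like those above could be estimated (this is not the study's target-trial-emulation code; the simulated data, column names, and use of the lifelines library are assumptions), a Cox proportional hazards fit might look like this:

```python
# Illustrative sketch: estimating a hazard ratio for antiviral initiation vs.
# non-initiation with a Cox proportional hazards model (lifelines).
# Data are simulated; the study used territory-wide EHR data and a full
# target-trial-emulation design with covariate adjustment.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "antiviral": rng.integers(0, 2, n),     # 1 = initiator, 0 = non-initiator
    "age": rng.normal(70, 10, n).round(),
    "cci": rng.integers(0, 10, n),          # Charlson Comorbidity Index
})
hazard = 0.02 * np.exp(-0.2 * df["antiviral"] + 0.02 * df["cci"])
df["time"] = np.minimum(rng.exponential(1 / hazard), 28)  # 28-day follow-up
df["death"] = (df["time"] < 28).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()   # exp(coef) for "antiviral" is the adjusted hazard ratio
```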

6.
JMIR Med Inform ; 12: e51842, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38722209

ABSTRACT

Background: Numerous pressure injury prediction models have been developed using electronic health record data, yet hospital-acquired pressure injuries (HAPIs) are increasing, which demonstrates the critical challenge of implementing these models in routine care. Objective: To help bridge the gap between development and implementation, we sought to create a model that was feasible, broadly applicable, dynamic, actionable, and rigorously validated, and then compare its performance to usual care (ie, the Braden scale). Methods: We extracted electronic health record data from 197,991 adult hospital admissions with 51 candidate features. For risk prediction and feature selection, we used logistic regression with a least absolute shrinkage and selection operator (LASSO) approach. To compare the model with usual care, we used the area under the receiver operating characteristic curve (AUC), Brier score, slope, intercept, and integrated calibration index. The model was validated using a temporally staggered cohort. Results: A total of 5458 HAPIs were identified between January 2018 and July 2022. We determined that 22 features were necessary to achieve a parsimonious and highly accurate model. The top 5 features included tracheostomy, edema, central line, first albumin measure, and age. Our model achieved higher discrimination than the Braden scale (AUC 0.897, 95% CI 0.893-0.901 vs AUC 0.798, 95% CI 0.791-0.803). Conclusions: We developed and validated an accurate prediction model for HAPIs that surpassed the standard-of-care risk assessment and fulfilled the necessary elements for implementation. Future work includes a pragmatic randomized trial to assess whether our model improves patient outcomes.
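As a generic illustration of the modeling approach named above (LASSO-penalized logistic regression evaluated with AUC and Brier score), and not the authors' 22-feature model, a sketch on simulated data might look like this:

```python
# Illustrative sketch: L1-penalized (LASSO) logistic regression for a rare
# binary outcome, evaluated with AUC and Brier score. Data are simulated;
# the real model reduced 51 candidate EHR features to 22.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

X, y = make_classification(n_samples=5000, n_features=51, n_informative=10,
                           weights=[0.97, 0.03], random_state=0)  # rare outcome
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
model.fit(X_train, y_train)

selected = np.flatnonzero(model.coef_[0])       # features kept by the L1 penalty
probs = model.predict_proba(X_test)[:, 1]
print("Features retained:", selected.size)
print("AUC:", round(roc_auc_score(y_test, probs), 3))
print("Brier score:", round(brier_score_loss(y_test, probs), 4))
```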

7.
J Gen Intern Med ; 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717666

ABSTRACT

BACKGROUND: Physicians are experiencing an increasing burden of messaging within the electronic health record (EHR) inbox. Studies have called for the implementation of tools and resources to mitigate this burden, but few studies have evaluated how these interventions affect time spent on inbox activities. OBJECTIVE: To explore the association of existing EHR efficiency tools and clinical resources with primary care physician (PCP) inbox time. DESIGN: Retrospective, cross-sectional study of inbox time among PCPs in network clinics affiliated with an academic health system. PARTICIPANTS: One hundred fifteen community-based PCPs. MAIN MEASURES: Inbox time, in hours, normalized to eight physician scheduled hours (IB-Time8). KEY RESULTS: Following adjustment for physician sex as well as panel size, age, and morbidity, we observed no significant differences in inbox time for physicians with and without message triage, custom inbox QuickActions, encounter specialists, and message pools. However, IB-Time8 increased by 0.01 inbox hours per eight scheduled hours for each additional staff member resource in a physician's practice (p = 0.03). CONCLUSIONS: Physician inbox time was not associated with the existing EHR efficiency tools evaluated in this study, but there may be a slight increase in inbox time among physicians in practices with larger teams.

8.
JMIR Form Res ; 8: e46420, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696775

ABSTRACT

BACKGROUND: Electronic health records (EHRs) are a cost-effective approach to providing the necessary foundations for clinical trial research. The ability to use EHRs in real-world clinical settings allows for pragmatic approaches to intervention studies with the emerging adult HIV population in these settings; however, the regulatory components related to the use of EHR data in multisite clinical trials pose unique challenges that researchers may be unprepared to address, which can delay study implementation, adversely affect study timelines, and risk noncompliance with established guidance. OBJECTIVE: As part of the larger Adolescent Trials Network (ATN) for HIV/AIDS Interventions Protocol 162b (ATN 162b) study, which evaluated clinical-level outcomes of an intervention including HIV treatment and pre-exposure prophylaxis services to improve retention within the emerging adult HIV population, the objective of this study is to highlight the regulatory process and challenges in implementing a multisite pragmatic trial using EHRs, so that future researchers conducting similar studies can navigate the often time-consuming regulatory process, adhere to study timelines, and comply with institutional and sponsor guidelines. METHODS: Eight sites were engaged in research activities: 4 sites, selected from participant recruitment venues within the ATN, participated in the intervention and data extraction activities, and an additional 4 sites were engaged in data management and analysis. The ATN 162b protocol team worked with site personnel to establish the necessary regulatory infrastructure to collect EHR data to evaluate retention in care and viral suppression, as well as paradata on the intervention component to assess the feasibility and acceptability of the mobile health intervention. Methods to develop this infrastructure included site-specific training activities and the development of both institutional reliance and data use agreements. RESULTS: Due to variations in site-specific activities and the associated regulatory implications, the study team used a phased approach, with the data extraction sites as phase 1 and the intervention sites as phase 2. This phased approach was intended to address the unique regulatory needs of all participating sites and to ensure that all sites were properly onboarded and all regulatory components were in place. Across all sites, the regulatory process spanned 6 months for the 4 data extraction and intervention sites and up to 10 months for the data management and analysis sites. CONCLUSIONS: Engaging in multisite clinical trial studies using EHR data is a multistep, collaborative effort that requires proper advance planning from the proposal stage to implement the necessary training and infrastructure. Planning, training, and understanding the various regulatory aspects, including the necessity of data use agreements, reliance agreements, external institutional review board review, and engagement with clinical sites, are foremost considerations to ensure successful implementation and adherence to pragmatic trial timelines and outcomes.

9.
Mhealth ; 10: 14, 2024.
Article in English | MEDLINE | ID: mdl-38689616

ABSTRACT

Background: The integration of real-time data (RTD) into electronic health records (EHRs) is transforming the healthcare of tomorrow. In this work, the common scenarios for capturing RTD from EHRs in healthcare are studied, and the approaches and tools for implementing real-time solutions are investigated. Methods: Delivering RTD through representational state transfer (REST) application programming interfaces (APIs) is usually accomplished with a publish-subscribe approach. Common technologies and protocols for implementing subscriptions are REST hooks and WebSockets. Polling is a straightforward mechanism for obtaining updates; nevertheless, it may not be the most efficient or scalable solution, and other approaches are often preferred. Database triggers and reverse proxies can also be useful in RTD scenarios; however, they must be designed carefully to avoid performance bottlenecks and other issues. Results: The implementation of subscriptions through REST hooks and WebSocket notifications using a Fast Healthcare Interoperability Resources (FHIR) REST API, as well as the design of a reverse proxy and database triggers, is described. Reference implementations of the solutions are provided in a GitHub repository. The reverse proxy was implemented in the Go language (Golang), which is well suited to server-side networking applications. For FHIR servers, a Python script is provided that creates a sample Subscription resource to send RTD when a new Observation resource for a specific patient ID is created. The sample WebSocket client is written using the "websocket-client" Python library, and the sample RTD endpoint is created using the "Flask" framework. For database triggers, a sample structured query language (SQL) query for Postgres is available that creates a trigger when an INSERT or UPDATE operation is executed on the FHIR resource table. Furthermore, a clinical use case is presented in which the main actors are healthcare providers (hospitals, physician private practices, general practitioners, and medical laboratories), health information networks, and the patient; the RTD flow and exchange are shown in detail, along with how they could improve healthcare. Conclusions: Capturing RTD is undoubtedly vital for health professionals and successful digital healthcare, yet the topic remains largely unexplored in the context of EHRs. In this work, the common scenarios and problems are investigated for the first time, and solutions and reference implementations are provided that could support and contribute to the development of real-time applications.
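As an illustration of the publish-subscribe pattern described above (this is not code from the paper's GitHub repository; the server base URL, patient ID, and receiving endpoint are assumptions), a minimal FHIR R4 Subscription registration might look like this:

```python
# Minimal sketch: register a FHIR R4 Subscription using a rest-hook channel.
# Assumptions: FHIR_BASE, PATIENT_ID, and the receiver endpoint are placeholders;
# the target server must support the R4 Subscription resource.
import requests

FHIR_BASE = "http://localhost:8080/fhir"   # hypothetical FHIR server
PATIENT_ID = "example-patient-id"          # hypothetical patient ID

subscription = {
    "resourceType": "Subscription",
    "status": "requested",
    "reason": "Push new Observations for one patient in real time",
    # Fire whenever an Observation for this patient is created or updated
    "criteria": f"Observation?patient=Patient/{PATIENT_ID}",
    "channel": {
        "type": "rest-hook",                         # could also be "websocket"
        "endpoint": "https://example.org/rtd-hook",  # hypothetical receiver endpoint
        "payload": "application/fhir+json",
    },
}

resp = requests.post(
    f"{FHIR_BASE}/Subscription",
    json=subscription,
    headers={"Content-Type": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
print("Created Subscription:", resp.json().get("id"))
```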

10.
Clin Kidney J ; 17(5): sfae098, 2024 May.
Article in English | MEDLINE | ID: mdl-38737345

ABSTRACT

Background: Chronic kidney disease (CKD) is a major global health problem and its early identification would allow timely intervention to reduce complications. We performed a systematic review and meta-analysis of multivariable prediction models derived and/or validated in community-based electronic health records (EHRs) for the prediction of incident CKD in the community. Methods: Ovid Medline and Ovid Embase were searched for records from 1947 to 31 January 2024. Measures of discrimination were extracted and pooled by Bayesian meta-analysis, with heterogeneity assessed through a 95% prediction interval (PI). Risk of bias was assessed using Prediction model Risk Of Bias ASsessment Tool (PROBAST) and certainty in effect estimates by Grading of Recommendations, Assessment, Development and Evaluation (GRADE). Results: Seven studies met inclusion criteria, describing 12 prediction models, with two eligible for meta-analysis including 2 173 202 patients. The Chronic Kidney Disease Prognosis Consortium (CKD-PC) (summary c-statistic 0.847; 95% CI 0.827-0.867; 95% PI 0.780-0.905) and SCreening for Occult REnal Disease (SCORED) (summary c-statistic 0.811; 95% CI 0.691-0.926; 95% PI 0.514-0.992) models had good model discrimination performance. Risk of bias was high in 64% of models, and driven by the analysis domain. No model met eligibility for meta-analysis if studies at high risk of bias were excluded, and certainty of effect estimates was 'low'. No clinical utility analyses or clinical impact studies were found for any of the models. Conclusions: Models derived and/or externally validated for prediction of incident CKD in community-based EHRs demonstrate good prediction performance, but assessment of clinical usefulness is limited by high risk of bias, low certainty of evidence and a lack of impact studies.
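As a schematic of the pooling step described above (this is not the authors' analysis; the logit-transformed c-statistics, standard errors, priors, and use of PyMC are assumptions), a Bayesian random-effects meta-analysis with a 95% prediction interval might be sketched as follows:

```python
# Illustrative sketch: Bayesian random-effects meta-analysis of c-statistics on
# the logit scale, with a 95% prediction interval for a new setting.
# Input values and priors are assumptions for demonstration only.
import numpy as np
import pymc as pm
import arviz as az

y = np.array([1.71, 1.46])    # assumed logit(c-statistic) per study
se = np.array([0.08, 0.35])   # assumed standard errors

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=2.0)        # pooled mean (logit scale)
    tau = pm.HalfNormal("tau", sigma=1.0)          # between-study heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(y))
    pm.Normal("obs", mu=theta, sigma=se, observed=y)
    pm.Normal("theta_new", mu=mu, sigma=tau)       # draw for the prediction interval
    idata = pm.sample(2000, tune=2000, target_accept=0.95, random_seed=1)

c_new = 1 / (1 + np.exp(-idata.posterior["theta_new"].values.ravel()))
lo, hi = np.percentile(c_new, [2.5, 97.5])
print(az.summary(idata, var_names=["mu", "tau"]))
print(f"95% prediction interval (c-statistic scale): {lo:.3f} to {hi:.3f}")
```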

11.
Cureus ; 16(4): e58032, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38738104

ABSTRACT

Electronic health record (EHR) systems have developed over time in parallel with general advancements in mainstream technology. As artificial intelligence (AI) systems rapidly impact multiple societal sectors, it has become apparent that medicine is not immune to the influence of this powerful technology. Particularly appealing is how AI may aid in improving healthcare efficiency through note-writing automation. This literature review explores the current state of EHR technologies in healthcare, focusing on possibilities for addressing EHR challenges through the automation of dictation and note-writing processes with AI integration. The review offers a broad understanding of existing capabilities and potential advancements, emphasizing innovations such as voice-to-text dictation, wearable devices, and AI-assisted procedure note dictation. The primary objective is to provide researchers with valuable insights, enabling them to generate new technologies and advancements within the healthcare landscape. By exploring the benefits, challenges, and future of AI integration, this review encourages the development of innovative solutions aimed at enhancing patient care and healthcare delivery efficiency.

12.
JMIR Med Inform ; 12: e53075, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38632712

ABSTRACT

Background: Pseudonymization has become a best practice to securely manage the identities of patients and study participants in medical research projects and data sharing initiatives. This method offers the advantage of not requiring the direct identification of data to support various research processes while still allowing for advanced processing activities, such as data linkage. Often, pseudonymization and related functionalities are bundled in specific technical and organizational units known as trusted third parties (TTPs). However, pseudonymization can significantly increase the complexity of data management and research workflows, necessitating adequate tool support. Common tasks of TTPs include supporting the secure registration and pseudonymization of patient and sample identities as well as managing consent. Objective: Despite the challenges involved, little has been published about successful architectures and functional tools for implementing TTPs in large university hospitals. The aim of this paper is to fill this research gap by describing the software architecture and tool set developed and deployed as part of a TTP established at Charité - Universitätsmedizin Berlin. Methods: The infrastructure for the TTP was designed to provide a modular structure while keeping maintenance requirements low. Basic functionalities were realized with the free MOSAIC tools. However, supporting common study processes requires implementing workflows that span different basic services, such as patient registration, followed by pseudonym generation and concluded by consent collection. To achieve this, an integration layer was developed to provide a unified Representational State Transfer (REST) application programming interface (API) as a basis for more complex workflows. Based on this API, a unified graphical user interface was also implemented, providing an integrated view of the information objects and workflows supported by the TTP. The API was implemented using Java and Spring Boot, while the graphical user interface was implemented in PHP and Laravel. Both services use a shared Keycloak instance as a unified management system for roles and rights. Results: By the end of 2022, the TTP had supported more than 10 research projects since its launch in December 2019. Within these projects, more than 3000 identities were stored, more than 30,000 pseudonyms were generated, and more than 1500 consent forms were submitted. In total, more than 150 people regularly work with the software platform. By implementing the integration layer and the unified user interface, together with comprehensive roles and rights management, the effort required to operate the TTP could be significantly reduced, as personnel of the supported research projects can use many functionalities independently. Conclusions: With the architecture and components described, we created a user-friendly and compliant environment for supporting research projects. We believe that the insights into the design and implementation of our TTP can help other institutions to set up corresponding structures efficiently and effectively.
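The pseudonymization step itself is not specified in the abstract. Purely as an illustration of the general technique (not of the MOSAIC tools or the Charité implementation), a keyed-hash approach could look like the following sketch, where the secret key and identifier formats are assumptions:

```python
# Illustrative sketch of keyed pseudonym generation (not the MOSAIC/Charité code).
# A secret key held only by the trusted third party keeps the mapping
# non-reversible for downstream researchers while staying deterministic
# for record linkage within a project.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-key-managed-by-the-TTP"  # assumption: key management is external

def pseudonymize(patient_identifier: str, project: str) -> str:
    """Derive a project-specific pseudonym from a patient identifier."""
    message = f"{project}:{patient_identifier}".encode("utf-8")
    digest = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return f"PSN-{digest[:16].upper()}"  # shortened for readability

# The same patient gets different pseudonyms in different projects,
# preventing trivial cross-project linkage without the TTP.
print(pseudonymize("patient-12345", "project-A"))
print(pseudonymize("patient-12345", "project-B"))
```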

13.
JMIR Hum Factors ; 11: e52625, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38598271

ABSTRACT

BACKGROUND: The rollout of the electronic health record (EHR) represents a central component of the digital transformation of the German health care system. Although the EHR promises more effective, safer, and faster treatment of patients from a systems perspective, its successful implementation largely depends on the patient. In a recent survey, 3 out of 4 Germans stated that they intend to use the EHR, whereas other studies show that the intention to use a technology is not a reliable and sufficient predictor of actual use. OBJECTIVE: Controlling for patients' intention to use the EHR, we investigated whether disease-specific risk perceptions related to the time course of the disease and disease-related stigma explain additional variance in patients' decisions to upload medical reports to the EHR. METHODS: In an online user study, 241 German participants were asked to interact with a randomly assigned medical report that varied systematically in terms of disease-related stigma (high vs low) and disease time course (acute vs chronic) and to decide whether to upload it to the EHR. RESULTS: Disease-related stigma (odds ratio 0.154, P<.001) offset the generally positive relationship between intention to use and the upload decision (odds ratio 2.628, P<.001), whereas the disease time course showed no effect. CONCLUSIONS: Even if patients generally intend to use the EHR, risk perceptions such as those related to diseases associated with social stigma may deter people from uploading related medical reports to the EHR. To ensure the reliable use of this key technology in a digitalized health care system, transparent and easy-to-comprehend information about the safety standards of the EHR is warranted across the board, even for populations that are generally in favor of using the EHR.


Subject(s)
Electronic Health Records , Social Stigma , Humans , Disease Progression , European People
14.
medRxiv ; 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38585743

ABSTRACT

Background: Electronic health records (EHR) are increasingly used for studying multimorbidities. However, concerns about accuracy, completeness, and EHRs being primarily designed for billing and administration raise questions about the consistency and reproducibility of EHR-based multimorbidity research. Methods: Utilizing phecodes to represent the disease phenome, we analyzed pairwise comorbidity strengths using a dual logistic regression approach and constructed multimorbidity as an undirected weighted graph. We assessed the consistency of the multimorbidity networks within and between two major EHR systems at local (nodes and edges), meso (neighboring patterns), and global (network statistics) scales. We present case studies to identify disease clusters and uncover clinically interpretable disease relationships. We provide an interactive web tool and a knowledge base combining data from multiple sources for online multimorbidity analysis. Findings: Analyzing data from 500,000 patients across the Vanderbilt University Medical Center and Mass General Brigham health systems, we observed a strong correlation in disease frequencies (Kendall's τ=0.643) and comorbidity strengths (Pearson ρ=0.79). Consistent network statistics across EHRs suggest a similar structure of multimorbidity networks at various scales. Comorbidity strengths and similarities of multimorbidity connection patterns align with disease genetic correlations. Graph-theoretic analyses revealed a consistent core-periphery structure, implying efficient network clustering through threshold graph construction. Using hydronephrosis as a case study, we demonstrated the network's ability to uncover clinically relevant disease relationships and provide novel insights. Interpretation: Our findings demonstrate the robustness of large-scale EHR data for studying complex disease interactions. The alignment of multimorbidity patterns with genetic data suggests potential utility for uncovering the shared etiology of diseases. The consistent core-periphery network structure offers a strategic approach to analyzing disease clusters. This work also sets the stage for advanced disease modeling, with implications for precision medicine. Funding: VUMC Biostatistics Development Award, UL1 TR002243, R21DK127075, R01HL140074, P50GM115305, R01CA227481.
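As a rough illustration of the network construction described above (not the authors' code), pairwise comorbidity strengths can be assembled into an undirected weighted graph; the phecode labels and weights below are invented for demonstration:

```python
# Illustrative sketch: build an undirected, weighted multimorbidity network from
# pairwise comorbidity strengths (e.g., adjusted associations between phecodes).
# The phecodes and weights below are invented for demonstration only.
import networkx as nx

pairwise_strengths = [
    ("585.3 Chronic kidney disease", "401.1 Essential hypertension", 3.2),
    ("585.3 Chronic kidney disease", "250.2 Type 2 diabetes", 2.7),
    ("250.2 Type 2 diabetes", "401.1 Essential hypertension", 2.1),
    ("591 Hydronephrosis", "585.3 Chronic kidney disease", 1.8),
]

G = nx.Graph()
for phecode_a, phecode_b, strength in pairwise_strengths:
    if strength > 1.0:            # threshold graph: keep positive associations only
        G.add_edge(phecode_a, phecode_b, weight=strength)

# Simple local and global statistics of the kind compared across EHR systems
print("Nodes:", G.number_of_nodes(), "Edges:", G.number_of_edges())
print("Weighted degree:", dict(G.degree(weight="weight")))
print("Clustering coefficient:", nx.average_clustering(G, weight="weight"))
```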

15.
JMIR Cardio ; 8: e53091, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648629

ABSTRACT

BACKGROUND: Cardiovascular conditions (eg, cardiac and coronary conditions, hypertensive disorders of pregnancy, and cardiomyopathies) were the leading cause of maternal mortality between 2017 and 2019. The United States has the highest maternal mortality rate of any high-income nation, disproportionately impacting those who identify as non-Hispanic Black or Hispanic. Novel clinical approaches to the detection and diagnosis of cardiovascular conditions are therefore imperative. Emerging research is demonstrating that machine learning (ML) is a promising tool for detecting patients at increased risk for hypertensive disorders during pregnancy. However, additional studies are required to determine how integrating ML and big data, such as electronic health records (EHRs), can improve the identification of obstetric patients at higher risk of cardiovascular conditions. OBJECTIVE: This study aimed to evaluate the capability and timing of a proprietary ML algorithm, Healthy Outcomes for all Pregnancy Experiences-Cardiovascular-Risk Assessment Technology (HOPE-CAT), to detect maternal-related cardiovascular conditions and outcomes. METHODS: Retrospective data from the EHRs of a large health care system were investigated by HOPE-CAT in a virtual server environment. Deidentification of EHR data and standardization enabled HOPE-CAT to analyze data without pre-existing biases. The ML algorithm assessed risk factors selected by clinical experts in cardio-obstetrics, and the algorithm was iteratively trained using relevant literature and current standards of risk identification. After refinement of the algorithm's learned risk factors, risk profiles were generated for every patient including a designation of standard versus high risk. The profiles were individually paired with clinical outcomes pertaining to cardiovascular pregnancy conditions and complications, wherein a delta was calculated between the date of the risk profile and the actual diagnosis or intervention in the EHR. RESULTS: In total, 604 pregnancies resulting in birth had records or diagnoses that could be compared against the risk profile; the majority of patients identified as Black (n=482, 79.8%) and aged between 21 and 34 years (n=509, 84.4%). Preeclampsia (n=547, 90.6%) was the most common condition, followed by thromboembolism (n=16, 2.7%) and acute kidney disease or failure (n=13, 2.2%). The average delta was 56.8 (SD 69.7) days between the identification of risk factors by HOPE-CAT and the first date of diagnosis or intervention of a related condition reported in the EHR. HOPE-CAT showed the strongest performance in early risk detection of myocardial infarction at a delta of 65.7 (SD 81.4) days. CONCLUSIONS: This study provides additional evidence to support ML in obstetrical patients to enhance the early detection of cardiovascular conditions during pregnancy. ML can synthesize multiday patient presentations to enhance provider decision-making and potentially reduce maternal health disparities.
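As a simple illustration of the delta (lead-time) metric described above, and not the proprietary HOPE-CAT implementation, the interval between risk identification and diagnosis can be computed from two date columns; the column names and values are assumptions:

```python
# Illustrative sketch of the "delta" calculation between a model's risk-profile
# date and the first related diagnosis date recorded in the EHR.
# Column names and values are assumptions for demonstration only.
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "risk_profile_date": pd.to_datetime(["2022-01-10", "2022-02-01", "2022-03-15"]),
    "first_diagnosis_date": pd.to_datetime(["2022-03-20", "2022-03-28", "2022-05-01"]),
})

df["delta_days"] = (df["first_diagnosis_date"] - df["risk_profile_date"]).dt.days
print(df)
print("Mean lead time (days):", df["delta_days"].mean(),
      "SD:", df["delta_days"].std())
```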

16.
Thorac Cancer ; 15(14): 1187-1194, 2024 May.
Article in English | MEDLINE | ID: mdl-38576119

ABSTRACT

INTRODUCTION: Restrictive eligibility criteria in cancer drug trials result in low enrollment rates and limited population diversity. Relaxed eligibility criteria (REC) based on solid evidence are becoming necessary for stakeholders worldwide. However, the absence of high-quality, favorable evidence remains a major challenge. This study presents a protocol to quantitatively evaluate the impact of relaxing eligibility criteria in common non-small cell lung cancer (NSCLC) protocols in China on the risk-benefit profile, including a detailed explanation of the rationale, framework, and design of REC. METHODS: To evaluate our REC in NSCLC drug trials, we will first construct a structured, cross-dimensional real-world NSCLC database using deep learning methods. We will then establish randomized virtual cohorts and perform benefit-risk assessment using Monte Carlo simulation and propensity matching. The Shapley value will be used to quantitatively measure the effect of changing each eligibility criterion on patient volume, clinical efficacy, and safety. DISCUSSION: This study is one of the few that focus on the problem of overly stringent eligibility criteria in cancer drug clinical trials, providing a quantitative evaluation of the effect of relaxing each NSCLC eligibility criterion. This study will not only provide scientific evidence for the rational design of population inclusion in lung cancer clinical trials, but will also establish a data governance system and a REC evaluation framework that can be generalized to other cancer studies.


Subject(s)
Lung Neoplasms , Humans , Lung Neoplasms/drug therapy , Risk Assessment/methods , Carcinoma, Non-Small-Cell Lung/drug therapy , Antineoplastic Agents/therapeutic use , Patient Selection , China , Eligibility Determination/methods
17.
J Biomed Inform ; 153: 104643, 2024 May.
Article in English | MEDLINE | ID: mdl-38621640

ABSTRACT

OBJECTIVE: Health inequities can be influenced by demographic factors such as race and ethnicity, proficiency in English, and biological sex. Disparities may manifest as a differential likelihood of testing, which correlates directly with the likelihood of an intervention to address an abnormal finding. Our retrospective observational study evaluated the presence of variation in glucose measurements in the intensive care unit (ICU). METHODS: Using the MIMIC-IV database (2008-2019) from a single-center, academic referral hospital in Boston (USA), we identified adult patients meeting sepsis-3 criteria. Exclusion criteria were diabetic ketoacidosis, ICU length of stay under 1 day, and unknown race or ethnicity. We performed a logistic regression analysis to assess differential likelihoods of glucose measurement on day 1. A negative binomial regression was fitted to assess the frequency of subsequent glucose readings. Analyses were adjusted for relevant clinical confounders and performed across three disparity proxy axes: race and ethnicity, sex, and English proficiency. RESULTS: We studied 24,927 patients, of whom 19.5% represented racial and ethnic minority groups, 42.4% were female, and 9.8% had limited English proficiency. No significant differences were found for glucose measurement on day 1 in the ICU. This pattern was consistent irrespective of the axis of analysis, i.e., race and ethnicity, sex, or English proficiency. Conversely, subsequent measurement frequency revealed potential disparities. Specifically, males (incidence rate ratio (IRR) 1.06, 95% confidence interval (CI) 1.01-1.21), patients who identify as Hispanic (IRR 1.11, 95% CI 1.01-1.21) or Black (IRR 1.06, 95% CI 1.01-1.12), and English-proficient patients (IRR 1.08, 95% CI 1.01-1.15) had higher chances of subsequent glucose readings. CONCLUSION: We found disparities in ICU glucose measurements among patients with sepsis, although the magnitude was small. Variation in disease monitoring is a source of data bias that may lead to spurious correlations when modeling health data.
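As a schematic of the two regression models described above (not the authors' code; the variable names and simulated data are assumptions), day-1 measurement can be modeled with logistic regression and subsequent measurement counts with a negative binomial model:

```python
# Illustrative sketch of the two models described in the abstract:
# (1) logistic regression for whether a glucose measurement occurred on ICU day 1,
# (2) negative binomial regression for the count of subsequent measurements.
# Variable names and data are assumptions for demonstration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "measured_day1": rng.integers(0, 2, n),
    "n_subsequent": rng.poisson(5, n),
    "male": rng.integers(0, 2, n),
    "english_proficient": rng.integers(0, 2, n),
    "sofa": rng.integers(0, 15, n),          # stand-in for clinical confounders
})

logit_fit = smf.glm("measured_day1 ~ male + english_proficient + sofa",
                    data=df, family=sm.families.Binomial()).fit()
nb_fit = smf.glm("n_subsequent ~ male + english_proficient + sofa",
                 data=df, family=sm.families.NegativeBinomial()).fit()

# Exponentiated coefficients give odds ratios (logistic) and IRRs (negative binomial)
print(np.exp(logit_fit.params))
print(np.exp(nb_fit.params))
```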


Subject(s)
Blood Glucose , Intensive Care Units , Humans , Male , Intensive Care Units/statistics & numerical data , Female , Blood Glucose/analysis , Middle Aged , Retrospective Studies , Aged , Adult , Ethnicity/statistics & numerical data
18.
Patterns (N Y) ; 5(4): 100951, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38645764

ABSTRACT

The COVID-19 pandemic highlighted the need for predictive deep-learning models in health care. However, practical prediction task design, fair comparison, and model selection for clinical applications remain a challenge. To address this, we introduce and evaluate two new prediction tasks-outcome-specific length-of-stay and early-mortality prediction for COVID-19 patients in intensive care-which better reflect clinical realities. We developed evaluation metrics, model adaptation designs, and open-source data preprocessing pipelines for these tasks while also evaluating 18 predictive models, including clinical scoring methods and traditional machine-learning, basic deep-learning, and advanced deep-learning models, tailored for electronic health record (EHR) data. Benchmarking results from two real-world COVID-19 EHR datasets are provided, and all results and trained models have been released on an online platform for use by clinicians and researchers. Our efforts contribute to the advancement of deep-learning and machine-learning research in pandemic predictive modeling.

19.
JMIR Med Inform ; 12: e51171, 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38596848

ABSTRACT

Background: With the capability to render prediagnoses, consumer wearables have the potential to affect subsequent diagnoses and the level of care in the health care delivery setting. Despite this, postmarket surveillance of consumer wearables has been hindered by the lack of codified terms in electronic health records (EHRs) to capture wearable use. Objective: We sought to develop a weak supervision-based approach to demonstrate the feasibility and efficacy of EHR-based postmarket surveillance on consumer wearables that render atrial fibrillation (AF) prediagnoses. Methods: We applied data programming, where labeling heuristics are expressed as code-based labeling functions, to detect incidents of AF prediagnoses. A labeler model was then derived from the predictions of the labeling functions using the Snorkel framework. The labeler model was applied to clinical notes to probabilistically label them, and the labeled notes were then used as a training set to fine-tune a classifier called Clinical-Longformer. The resulting classifier identified patients with an AF prediagnosis. A retrospective cohort study was conducted, where the baseline characteristics and subsequent care patterns of patients identified by the classifier were compared against those who did not receive a prediagnosis. Results: The labeler model derived from the labeling functions showed high accuracy (0.92; F1-score=0.77) on the training set. The classifier trained on the probabilistically labeled notes accurately identified patients with an AF prediagnosis (0.95; F1-score=0.83). The cohort study conducted using the constructed system carried enough statistical power to verify the key findings of the Apple Heart Study, which enrolled a much larger number of participants, where patients who received a prediagnosis tended to be older, male, and White with higher CHA2DS2-VASc (congestive heart failure, hypertension, age ≥75 years, diabetes, stroke, vascular disease, age 65-74 years, sex category) scores (P<.001). We also made a novel discovery that patients with a prediagnosis were more likely to use anticoagulants (525/1037, 50.63% vs 5936/16,560, 35.85%) and have an eventual AF diagnosis (305/1037, 29.41% vs 262/16,560, 1.58%). At the index diagnosis, the existence of a prediagnosis did not distinguish patients based on clinical characteristics, but did correlate with anticoagulant prescription (P=.004 for apixaban and P=.01 for rivaroxaban). Conclusions: Our work establishes the feasibility and efficacy of an EHR-based surveillance system for consumer wearables that render AF prediagnoses. Further work is necessary to generalize these findings for patient populations at other sites.
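As a minimal illustration of the data-programming workflow described above (not the study's actual labeling functions; the keywords, example notes, and labels are assumptions), keyword-based labeling functions can be combined with Snorkel's LabelModel roughly as follows:

```python
# Minimal sketch of weak supervision via data programming (Snorkel ~v0.9 API).
# The labeling heuristics and example notes are invented; the study used
# clinically curated labeling functions over EHR notes.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

@labeling_function()
def lf_wearable_afib_alert(x):
    text = x.note.lower()
    return POSITIVE if "apple watch" in text and "afib" in text else ABSTAIN

@labeling_function()
def lf_no_wearable_mention(x):
    text = x.note.lower()
    return NEGATIVE if "watch" not in text and "wearable" not in text else ABSTAIN

df_train = pd.DataFrame({"note": [
    "Patient presents after Apple Watch AFib notification.",
    "Routine follow-up for hypertension; no new complaints.",
]})

applier = PandasLFApplier(lfs=[lf_wearable_afib_alert, lf_no_wearable_mention])
L_train = applier.apply(df=df_train)

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L_train, n_epochs=200, seed=42)
probs = label_model.predict_proba(L=L_train)   # probabilistic labels for classifier training
print(probs)
```

The probabilistic labels produced this way would then serve as the training targets for a downstream text classifier, analogous to the fine-tuned Clinical-Longformer described in the abstract.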

20.
JMIR Form Res ; 8: e52920, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557671

ABSTRACT

BACKGROUND: The COVID-19 pandemic added to decades of evidence that public health institutions are routinely stretched beyond their capacity. Community health workers (CHWs) can be a crucial extension of public health resources to address health inequities, but systems to document CHW efforts are often fragmented and prone to unneeded redundancy, errors, and inefficiency. OBJECTIVE: We sought to develop a more efficient data collection system for recording the wide range of community-based efforts performed by CHWs. METHODS: The Communities Organizing to Promote Equity (COPE) project is an initiative to address health disparities across Kansas, in part, through the deployment of CHWs. Our team iteratively designed and refined the features of a novel data collection system for CHWs. Pilot tests with CHWs occurred over several months to ensure that the functionality supported their daily use. Following implementation of the database, procedures were set to sustain the collection of feedback from CHWs, community partners, and organizations with similar systems, so that the database could be continually modified to meet the needs of users. A continuous quality improvement process was conducted monthly to evaluate CHW performance; feedback was exchanged at the team and individual levels regarding the continuous quality improvement results and opportunities for improvement. Further, a 15-item feedback survey was distributed to all 33 COPE CHWs and supervisors to assess the feasibility of database features, accessibility, and overall satisfaction. RESULTS: At launch, the database had 60 active users in 20 counties. Documented client interactions begin with needs assessments (modified versions of the Arizona Self-Sufficiency Matrix and PRAPARE [Protocol for Responding to and Assessing Patient Assets, Risks, and Experiences]) and continue with longitudinal tracking of progress toward goals. A user-specific, automated alerts-based dashboard displays clients needing follow-up and upcoming events. The database contains over 55,000 documented encounters across more than 5079 clients, and available resources from over 2500 community organizations have been documented. Survey data indicated that 84% (27/32) of respondents considered the overall navigation of the database very easy. The majority of respondents indicated they were overall very satisfied (14/32, 44%) or satisfied (15/32, 48%) with the database. Open-ended responses indicated that database features such as the documentation of community organizations, visual confirmation of consent forms, and data storage in a Health Insurance Portability and Accountability Act-compliant record system improved client engagement, enrollment processes, and identification of resources. CONCLUSIONS: Our database extends beyond conventional electronic medical records and provides flexibility for ever-changing needs. The COPE database provides real-world data on CHW accomplishments, thereby improving the uniformity of data collection to enhance monitoring and evaluation. This database can serve as a model for community-based documentation systems and can be adapted for use in other community settings.
